perm filename WEIZEN[S86,JMC] blob
sn#817801 filedate 1986-05-25 generic text, type C, neo UTF8
weizen[s86,jmc] Reply to Weizenbaum
vijay@ernie.berkeley.edu
Reply to Weizenbaum
1. That's "intelligentsia".
2. The views expressed by Weizenbaum in these remarks are similar
to those in his book "Computer Power and Human Reason", and I have
commented on them at more than sufficient length in my review
of the book. Copies available on request.
3. The question of what psychological principles are embodied
in the new chess program is, I believe, dealt with in Berliner's
paper on Hitech.
vijay@ernie.berkeley.edu
Reply to Weizenbaum's second round.
Copyright 1986, John McCarthy
I don't understand the quotes around the words "scientific"
and "technical" as applied to "issues". Does Weizenbaum imply that
there are no scientific and technical issues even if he doesn't want
to discuss them? Or is this just a rhetorical excess?
I don't place any limit on our eventual ability to understand
humans well enough to embody this understanding in a computer program.
I also don't place any limit on our eventual ability to make a computer
program that might learn by scientific experiment how humans operate
and might know more about humans than humans know, just as humans
know more about viruses, bacteria, dogs and apes than any of these
know about themselves.
However, it turns out that some people imagine that we are
further on in this study than we are. These people might give others
the impression that today's or next year's artificial intelligence
can surely "understand and produce wisdom with respect to
interpersonal, social and cultural human affairs, as occasionally human
intelligence can."
Like all exaggerations, this is unfortunate, but I don't see it as
having any more tragic consequences than any others that we are always
encountering - perhaps less so than exaggerated expectations that some
disease is about to be cured. Perhaps someone will place excessive trust
in some expert system, but I haven't heard that this has happened yet, and like
all similar errors of overconfidence, it will be rudely corrected. So far
excessive confidence in computer programs hasn't had results as
unfortunate as excessive confidence in the seals between the segments of
solid rocket boosters.
My opinion that AI will eventually succeed in a high-level
understanding of human affairs is just an opinion. I don't see it as a
problem that is ripe for attack. Maybe someone else (e.g. Minsky) has
ideas on how to attack it. Weizenbaum has the opinion that we won't
ever succeed. As far as I can see, the dispute is at the level of cogency of
arguments about whether humanity will eventually succeed in travelling to
other galaxies.
I don't know how applicable AI is to SDI. Given the
present state of AI, I hope it's not vital. My opinion is
that the SDI work, including the computer work, reduces the
probability of nuclear war as well as increasing the likely
number of survivors if it happens. Conversely, I think
that if Weizenbaum makes converts, the probability of nuclear
war will increase and the expected number of survivors be reduced.
Unless Weizenbaum deals with these opinions, probably
held by a majority of Americans as well as the
participants in defense research, his moral criticisms of
these participants are unwarranted. Perhaps he supposes that
these opinions are not sincerely held.
I also find the following somewhat misleading.
"But I would add that science is a social enterprise in another sense
as well. What is and what is not to be counted as scientific, as
fact is decided by a consensus of members of the relevant section of
the scientific community. Who is and who is not a member of a
particular section is similarly a social decision. It makes a
difference when someone who was once out is, as the communists say,
rehabilitated."
Its error is to ascribe a more decisive importance
to scientific public opinion than that opinion has or even
claims. There are many sources of support, and people who think
that some activity, e.g. AI, is not scientific hardly ever
attempt witch hunts. The institution of academic freedom is
explicitly intended to protect people who go against the consensus.